Information-based complexity

Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems that arise in physical science, economics, engineering, and mathematical finance. IBC has studied such continuous problems as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration. All these problems involve functions (typically multivariate) of a real or complex variable. Since one can never obtain a closed-form solution to the problems of interest, one has to settle for a numerical solution. Since a function of a real or complex variable cannot be entered into a digital computer, the solution of continuous problems involves ''partial'' information. To give a simple illustration, in the numerical approximation of an integral, only samples of the integrand at a finite number of points are available. In the numerical solution of partial differential equations, the functions specifying the boundary conditions and the coefficients of the differential operator can only be sampled. Furthermore, this partial information can be expensive to obtain. Finally, the information is often ''contaminated'' by noise.
The goal of information-based complexity is to create a theory of computational complexity and optimal algorithms for problems with partial, contaminated and priced information, and to apply the results to answering questions in various disciplines. Examples of such disciplines include physics, economics, mathematical finance, computer vision, control theory, geophysics, medical imaging, weather forecasting and climate prediction, and statistics. The theory is developed over abstract spaces, typically Hilbert or Banach spaces, while the applications are usually for multivariate problems.
Since the information is partial and contaminated, only approximate solutions can be obtained. IBC studies computational complexity and optimal algorithms for approximate solutions in various settings. Since the worst case setting often leads to negative results such as unsolvability and intractability, settings with weaker assurances such as average, probabilistic and randomized are also studied. A fairly new area of IBC research is continuous quantum computing.
==Overview==
We illustrate some important concepts with a very simple example, the computation of
::::\int_0^1 f(x)\,dx.
For most integrands we can't use the fundamental theorem of calculus to compute the integral analytically; we have to approximate it numerically. We compute the values of f at ''n'' points
::::f(t_1), f(t_2), \dots, f(t_n).
The ''n'' numbers are the partial information about the true integrand f(x). We combine these ''n'' numbers by a combinatory algorithm to compute an approximation to the integral. See the monograph ''Complexity and Information'' for particulars.
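As one concrete instance of such a combinatory algorithm (a minimal sketch; the article leaves the algorithm abstract), the composite midpoint rule combines ''n'' samples of the integrand with equal weights 1/''n'':

```python
import math

def midpoint_rule(f, n):
    """Approximate the integral of f over [0, 1] from n point samples.

    The n values f((i + 0.5) / n) are the partial information about f;
    averaging them is the combinatory algorithm that maps that
    information to an approximate integral.
    """
    return sum(f((i + 0.5) / n) for i in range(n)) / n

# Example: integrate sin(pi * x) over [0, 1]; the exact value is 2 / pi.
print(midpoint_rule(lambda x: math.sin(math.pi * x), n=100), 2 / math.pi)
```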
Because we have only partial information, we can use an ''adversary argument'' to tell us how large ''n'' has to be to compute an \epsilon-approximation. Because of these information-based arguments we can often obtain tight bounds on the complexity of continuous problems. For discrete problems such as integer factorization or the travelling salesman problem we have to settle for conjectures about the complexity hierarchy. The reason is that the input is a number or a vector of numbers and can thus be entered into the computer. Thus there is typically no adversary argument at the information level, and the complexity of a discrete problem is rarely known.
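A sketch of the adversary argument for one concrete class, integrands with Lipschitz constant 1 (a class chosen here for illustration; the article does not fix one): build a "tent" function that vanishes at every sample point. The algorithm sees identical information for +tent and -tent, so its error is at least the tent's integral, which forces ''n'' to grow like \epsilon^{-1/2} for this class.

```python
def fooling_function(sample_points):
    """Return a Lipschitz-1 tent that is zero at every sample point,
    together with its integral over [0, 1].

    Any algorithm that only sees f at sample_points returns the same
    answer for +tent and -tent, so its worst-case error is at least
    the tent's integral: an information-based lower bound.
    """
    pts = sorted(set([0.0, 1.0] + list(sample_points)))
    # Place the tent in the widest gap between adjacent information points.
    left, right = max(zip(pts, pts[1:]), key=lambda gap: gap[1] - gap[0])
    mid, half = (left + right) / 2.0, (right - left) / 2.0

    def tent(x):
        return max(0.0, half - abs(x - mid))

    return tent, half ** 2  # triangle area: base (2 * half) * height (half) / 2

# With n samples the widest gap is at least 1 / (n + 1), so the guaranteed
# error is about 1 / (4 * n**2); hence n must grow like eps ** -0.5.
n = 10
tent, lower_bound = fooling_function([(i + 0.5) / n for i in range(n)])
print(lower_bound)
```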
The univariate integration problem was for illustration only. What is significant for many applications is multivariate integration, where the number of variables runs into the hundreds or thousands. The number of variables may even be infinite; we then speak of path integration. The reason that integrals are important in many disciplines is that they occur when we want to know the expected behavior of a continuous process. See, for example, the application to mathematical finance below.
Assume we want to compute an integral in ''d'' dimensions (dimensions and variables are used interchangeably) and that we want to guarantee an error of at most \epsilon for any integrand in some class. The computational complexity of the problem is known to be of order \epsilon^{-d}. (Here we are counting the number of function evaluations and the number of arithmetic operations, so this is the time complexity.) This would take many years for even modest values of ''d''. The exponential dependence on ''d'' is called the ''curse of dimensionality''. We say the problem is intractable.
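A rough back-of-the-envelope illustration of "many years" (the numbers here are chosen for illustration, not taken from the article): with \epsilon = 10^{-2} and ''d'' = 30,
::::\epsilon^{-d} = \left(10^{-2}\right)^{-30} = 10^{60}
function evaluations are required; at 10^9 evaluations per second this is about 10^{51} seconds, vastly longer than the age of the universe (roughly 4 \times 10^{17} seconds).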
We stated the curse of dimensionality for integration. But exponential dependence on ''d'' occurs for almost every continuous problem that has been investigated. How can we try to vanquish the curse? There are two possibilities:
* We can weaken the guarantee that the error must be less than \epsilon (worst case setting) and settle for a stochastic assurance. For example, we might only require that the expected error be less than \epsilon (average case setting). Another possibility is the randomized setting. For some problems we can break the curse of dimensionality by weakening the assurance; for others, we cannot. There is a large IBC literature on results in various settings; see Where to Learn More below. A Monte Carlo sketch of the randomized setting appears after this list.
* We can incorporate domain knowledge. See An Example: Mathematical Finance below.
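A minimal sketch of the randomized setting mentioned above (assuming square-integrable integrands, a standard Monte Carlo hypothesis not spelled out in the article): plain Monte Carlo sampling has expected error of order n^{-1/2} regardless of ''d'', so the curse of dimensionality disappears under this weaker, stochastic assurance.

```python
import random

def monte_carlo_integrate(f, d, n, seed=0):
    """Estimate the integral of f over the d-dimensional unit cube.

    For square-integrable f the expected error decays like n ** -0.5,
    independently of d: a stochastic assurance instead of a worst-case one.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f([rng.random() for _ in range(d)])
    return total / n

# Example in d = 100 dimensions: the integrand sum(x) / d has exact integral 0.5.
print(monte_carlo_integrate(lambda x: sum(x) / len(x), d=100, n=10_000))
```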
